A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization

Authors

Abstract

In this paper, a novel stochastic extra-step quasi-Newton method is developed to solve a class of nonsmooth nonconvex composite optimization problems. We assume that the gradient of the smooth part of the objective function can only be approximated by stochastic oracles. The proposed method combines general, possibly stochastic, higher order steps derived from an underlying proximal-type fixed-point equation with additional stochastic proximal gradient steps to guarantee convergence. Based on suitable bounds on the step sizes, we establish global convergence to stationary points in expectation, and an extension of the approach using variance reduction techniques is discussed. Motivated by large-scale and big data applications, we investigate a stochastic coordinate-type quasi-Newton scheme that allows us to generate cheap and tractable stochastic higher order directions. Finally, numerical results on large-scale logistic regression and deep learning problems show that our algorithm compares favorably with other state-of-the-art methods.
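
The abstract describes a two-stage update: a higher order (quasi-Newton-type) step built from the residual of a proximal fixed-point equation, followed by a stochastic proximal gradient step that safeguards convergence. The sketch below illustrates that general pattern on a toy l1-regularized least-squares problem; the prox_l1 helper, the fixed inverse-Hessian model 0.5*I, the mini-batch oracle, and all step sizes are assumptions made for illustration only and do not reproduce the authors' method.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding); the l1 choice of the
    # nonsmooth term is an assumption made only for this illustration.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def extra_step_update(x, stoch_grad, H_inv, lam=1.0, alpha=0.1):
    # One illustrative iteration: a quasi-Newton-like step on the residual of the
    # proximal fixed-point equation F(x) = x - prox_{lam*phi}(x - lam*grad f(x)),
    # followed by a stochastic proximal gradient "extra step" as a safeguard.
    g = stoch_grad(x)                       # stochastic estimate of the smooth gradient
    residual = x - prox_l1(x - lam * g, lam)
    z = x - H_inv @ residual                # higher order trial point
    gz = stoch_grad(z)
    return prox_l1(z - alpha * gz, alpha)   # extra stochastic proximal gradient step

# Toy usage on a random l1-regularized least-squares instance.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)

def stoch_grad(x, batch=10):
    idx = rng.choice(A.shape[0], batch, replace=False)
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch

x = np.zeros(10)
for _ in range(100):
    x = extra_step_update(x, stoch_grad, 0.5 * np.eye(10))
```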

Similar articles

Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization

Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle (SFO). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case iteration complexity. ...
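
Since this snippet centers on quasi-Newton updates fed by a stochastic first-order oracle (SFO), the following minimal sketch shows a standard BFGS inverse-Hessian update with a curvature skip rule, a common safeguard when the pair (s, y) is built from noisy gradients; it is only a generic illustration, not the framework analyzed in that paper.

```python
import numpy as np

def bfgs_inverse_update(H, s, y, eps=1e-8):
    # Standard BFGS update of an inverse Hessian approximation H from the pair
    # s = x_new - x_old, y = g_new - g_old. With SFO (noisy) gradients the
    # curvature condition s^T y > 0 can fail, so the update is simply skipped
    # in that case; this skip rule is a common safeguard, not the specific
    # correction used in the paper.
    sy = s @ y
    if sy <= eps * np.linalg.norm(s) * np.linalg.norm(y):
        return H  # skip the update to preserve positive definiteness
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```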

A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization

We analyze stochastic gradient algorithms for optimizing nonconvex, nonsmooth finite-sum problems. In particular, the objective function is given by the summation of a differentiable (possibly nonconvex) component, together with a possibly non-differentiable but convex component. We propose a proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. The algorithm is ...
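
As a rough illustration of the variance-reduction idea behind ProxSVRG-type methods, the sketch below runs one epoch in which the full gradient at a snapshot point anchors the stochastic estimator used in each inner proximal step. The l1 proximal map, step size, and epoch length are assumptions for illustration; this is not the ProxSVRG+ implementation.

```python
import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: proximal map of t * ||.||_1, an illustrative choice for
    # the convex non-differentiable component.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def svrg_prox_epoch(x, grad_i, n, eta=0.05, inner_steps=50, rng=None):
    # One epoch of a generic proximal-SVRG loop: grad_i(x, i) returns the
    # gradient of the i-th component function, and the full gradient at the
    # snapshot anchors the variance-reduced estimator used in each inner step.
    rng = rng or np.random.default_rng()
    snapshot = x.copy()
    full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
    for _ in range(inner_steps):
        i = rng.integers(n)
        v = grad_i(x, i) - grad_i(snapshot, i) + full_grad  # variance-reduced gradient
        x = prox_l1(x - eta * v, eta)
    return x
```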

A Quasi-Newton Approach to Nonsmooth Convex Optimization

We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: The local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-reg...

An Approximate Quasi-Newton Bundle-Type Method for Nonsmooth Optimization

...rate of convergence under some additional assumptions, and it should be noted that we only use approximate values of the objective function and its subgradients, which makes the algorithm easier to implement. Some notation is listed below for presenting the algorithm. (i) ∂f(x) = {ξ ∈ R^n | f(z) ≥ f(x) + ξ^T(z − x), ∀z ∈ R^n}, the subdifferential of f at x, and each such...
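
To make the quoted subdifferential definition concrete, here is a small numerical check of the subgradient inequality for f(x) = |x| at x = 0, where the subdifferential is the interval [-1, 1]; the choice xi = 0.3 is an arbitrary element picked for this illustration.

```python
import numpy as np

# Check f(z) >= f(x) + xi * (z - x) for f(x) = |x| at x = 0 and a subgradient
# xi = 0.3 in [-1, 1], over a grid of test points z.
f = np.abs
x, xi = 0.0, 0.3
z = np.linspace(-2.0, 2.0, 9)
assert np.all(f(z) >= f(x) + xi * (z - x))
```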


Journal

Journal title: Mathematical Programming

Year: 2021

ISSN: 0025-5610, 1436-4646

DOI: https://doi.org/10.1007/s10107-021-01629-y